Semantically Contrastive Learning for Low-Light Image Enhancement

Authors

Abstract

Low-light image enhancement (LLE) remains challenging due to the unfavorable prevailing low-contrast and weak-visibility problems of single RGB images. In this paper, we respond to the intriguing learning-related question -- if leveraging both accessible unpaired over/underexposed images and high-level semantic guidance, can improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE). Beyond the existing LLE wisdom, it casts the image enhancement task as multi-task joint learning, where LLE is converted into three constraints of contrastive learning, semantic brightness consistency, and feature preservation for simultaneously ensuring the exposure, texture, and color consistency. SCL-LLE allows the LLE model to learn from unpaired positives (normal-light)/negatives (over/underexposed), and enables it to interact with the scene semantics to regularize the image enhancement network; yet such interaction between high-level semantic knowledge and the low-level signal prior is seldom investigated in previous methods. Trained on readily available open data, extensive experiments demonstrate that our method surpasses the state-of-the-art LLE models over six independent cross-scene datasets. Moreover, SCL-LLE's potential to benefit downstream semantic segmentation under extremely dark conditions is discussed. Source Code: https://github.com/LingLIx/SCL-LLE.
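The contrastive constraint described above can be sketched as a ratio of feature distances: pull the enhanced image toward a normal-light positive and push it away from over/under-exposed negatives. This is a minimal illustrative sketch, not the paper's exact loss -- SCL-LLE computes such distances on deep (e.g. VGG) feature maps, and the function and argument names here are hypothetical.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, eps=1e-8):
    """Contrastive regularization sketch: mean L1 distance from the
    enhanced-image features (anchor) to a normal-light positive,
    divided by the mean distance to the exposure negatives.
    Smaller is better: close to the positive, far from the negatives."""
    d_pos = np.abs(anchor - positive).mean()
    d_neg = np.mean([np.abs(anchor - n).mean() for n in negatives])
    return d_pos / (d_neg + eps)
```

Minimizing this ratio during training drives the enhancement network's output into the "normal-light" region of feature space without requiring pixel-aligned paired data.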


Similar Articles

Contrastive Learning for Image Captioning

Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learn...


Low Light Image Enhancement via Sparse Representations

Enhancing the quality of low light images is a critical processing function both from an aesthetics and an information extraction point of view. This work proposes a novel approach for enhancing images captured under low illumination conditions based on the mathematical framework of Sparse Representations. In our model, we utilize the sparse representation of low light image patches in an appro...
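The sparse-representation machinery such methods build on can be illustrated with one iteration of ISTA (iterative shrinkage-thresholding) for the patch sparse-coding problem min ||y - Dx||^2 + lam * ||x||_1. This is a generic sketch of sparse coding over a dictionary D, not the specific enhancement model of the paper above.

```python
import numpy as np

def ista_step(D, y, x, lam, step):
    """One ISTA iteration for sparse coding: gradient step on the
    least-squares data term, then soft-thresholding for the L1 penalty.
    D: dictionary (columns are atoms), y: patch vector, x: current code."""
    grad = D.T @ (D @ x - y)          # gradient of 0.5 * ||y - Dx||^2 (up to scale)
    z = x - step * grad               # gradient descent step
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
```

Iterating this step from x = 0 yields a sparse code for each low-light patch, which such methods then manipulate (e.g. rescale or denoise) before reconstructing the enhanced patch as D @ x.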


Low-Light Image Enhancement Using Adaptive Digital Pixel Binning

This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightnes...
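The core binning idea can be sketched as summing k x k pixel blocks, which boosts the low-light signal (and SNR) at the cost of spatial resolution. This is a generic, non-adaptive sketch for intuition only; the paper's contribution is an adaptive scheme that avoids the artifacts plain binning and amplification introduce.

```python
import numpy as np

def digital_binning(img, k=2):
    """Plain k x k digital binning: sum each k x k block of a 2-D image.
    Summing neighbors raises brightness and averages out read noise,
    but the output is k times smaller along each axis."""
    h, w = img.shape
    h, w = h - h % k, w - w % k                      # crop to a multiple of k
    blocks = img[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.sum(axis=(1, 3))
```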


MSR-net: Low-light Image Enhancement Using Deep Convolutional Network

Images captured in low-light conditions usually suffer from very low contrast, which increases the difficulty of subsequent computer vision tasks to a great extent. In this paper, a low-light image enhancement model based on convolutional neural network and Retinex theory is proposed. Firstly, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with diff...
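The classical multi-scale Retinex the abstract refers to can be sketched as the average, over several scales, of log(image) minus log(Gaussian-blurred image). This is a hedged reconstruction of the standard formulation with uniform scale weights, using SciPy's Gaussian filter; the paper's observation is that exactly this pipeline can be expressed as a feedforward CNN with different Gaussian kernels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Multi-scale Retinex sketch: for each scale sigma, subtract the
    log of the Gaussian-blurred image (illumination estimate) from the
    log image, then average across scales (uniform weights)."""
    img = img.astype(np.float64) + eps               # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, sigma=s))
    return out / len(sigmas)
```

On a perfectly flat image the illumination estimate equals the image at every scale, so the output is zero, reflecting that Retinex keeps reflectance (local contrast) and discards smooth illumination.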


Video Enhancement For Low Light Environment

Digital video has become an integral part of everyday life, and video enhancement is an active topic in computer vision that has received much attention in recent years. This paper focuses on enhancing video captured in low-light conditions; footage recorded in very dim light is especially targeted. Enhancement of low-light video is d...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i2.20046